Few-shot class-incremental learning (FSCIL) aims to continually learn novel concepts from only a few samples, and it easily suffers from catastrophic forgetting and overfitting. The inaccessibility of old classes and the scarcity of novel samples make it hard to realize a trade-off between retaining old knowledge and learning novel concepts. Inspired by the observation that different models memorize different knowledge when learning novel concepts, we propose a Memorizing Complementation Network (MCNet) to ensemble multiple models that complement each other with distinct memorized knowledge in the new task. Additionally, to update the model with few novel samples, we develop a Prototype Smoothing Hard-mining Triplet (PSHT) loss to push novel samples away not only from each other in the current task but also from the old distribution. Extensive experiments on three benchmark datasets, i.e., CIFAR100, miniImageNet and CUB200, demonstrate the superiority of our proposed method.
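The PSHT loss is only described at a high level above. As a rough illustration, the sketch below (ours, not the authors' code) implements a generic hard-mining triplet loss in PyTorch in which each anchor is smoothed toward its class mean and old-class prototypes join the negative pool, so novel samples are pushed away from both current-task samples and the old distribution; the function name, the smoothing weight `alpha`, and the margin are assumptions.

```python
import torch
import torch.nn.functional as F

def psht_style_triplet(feats, labels, old_protos, margin=0.5, alpha=0.9):
    # Smooth each anchor toward its current-class mean (a crude stand-in
    # for the paper's prototype smoothing).
    means = {int(c): feats[labels == c].mean(0) for c in labels.unique()}
    anchors = torch.stack([alpha * f + (1 - alpha) * means[int(y)]
                           for f, y in zip(feats, labels)])
    anchors = F.normalize(anchors, dim=1)

    # Candidate set: in-batch novel samples plus old-class prototypes,
    # the latter labeled -1 so they can only act as negatives.
    cands = torch.cat([F.normalize(feats, dim=1),
                       F.normalize(old_protos, dim=1)])
    cand_lbls = torch.cat([labels, labels.new_full((len(old_protos),), -1)])
    dist = torch.cdist(anchors, cands)

    losses = []
    for i, y in enumerate(labels):
        hardest_pos = dist[i][cand_lbls == y].max()  # farthest same-class sample
        hardest_neg = dist[i][cand_lbls != y].min()  # nearest negative, incl. old protos
        losses.append(F.relu(hardest_pos - hardest_neg + margin))
    return torch.stack(losses).mean()
```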
In the past few years, crowd counting methods based on convolutional neural networks (CNNs) have achieved promising results. However, the scale variation problem remains a huge challenge for accurate count estimation. In this paper, we propose a Multi-Scale Feature Aggregation Network (MSFANet) that can alleviate this problem to some extent. Specifically, our approach consists of two feature aggregation modules: Short Aggregation (ShortAgg) and Skip Aggregation (SkipAgg). The ShortAgg module aggregates the features of adjacent convolutional blocks; its purpose is to let features with different receptive fields fuse gradually from the bottom of the network. The SkipAgg module directly propagates features with small receptive fields to features with much larger receptive fields; its purpose is to promote the fusion of features with small and large receptive fields. In particular, the SkipAgg module introduces local self-attention features from Swin Transformer blocks to incorporate rich spatial information. Furthermore, we propose a local-and-global-based counting loss that takes the non-uniform crowd distribution into account. Extensive experiments on four challenging datasets (the ShanghaiTech, UCF_CC_50, UCF-QNRF and WorldExpo'10 datasets) show that the proposed easy-to-implement MSFANet achieves promising results compared with previous state-of-the-art approaches.
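The two aggregation modules can be pictured with a toy PyTorch network. The block below is our minimal sketch, not the paper's architecture: `b1`–`b3` stand in for convolutional blocks, ShortAgg is the adjacent-block addition, and SkipAgg is the direct long jump from the shallow feature; the Swin Transformer attention branch and the counting loss are omitted, and even spatial dimensions are assumed so the pooled shapes line up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAggNet(nn.Module):
    """Toy illustration of ShortAgg (adjacent fusion) and SkipAgg (long jump)."""
    def __init__(self, ch=32):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.b2 = nn.Sequential(nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.head = nn.Conv2d(ch, 1, 1)  # 1-channel crowd density map

    def forward(self, x):
        f1 = self.b1(x)
        # ShortAgg: fuse the adjacent block's features (pool f1 to match f2).
        f2 = self.b2(f1) + F.avg_pool2d(f1, 2)
        # SkipAgg: propagate the small-receptive-field f1 directly to f3.
        f3 = self.b3(f2) + F.avg_pool2d(f1, 4)
        return self.head(f3)

density = TinyAggNet()(torch.randn(1, 3, 64, 64))  # -> (1, 1, 16, 16)
```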
Fast stereo-based 3D object detectors have recently made great progress in inference time. However, their accuracy lags far behind that of high-precision methods. We argue that the main reason is the missing or poor 3D geometry feature representation in fast stereo-based methods. To address this issue, we propose an Efficient Geometry Feature Generation Network (EGFN). The key to our EGFN is an efficient and effective 3D geometry feature representation (EGFR) module. In the EGFR module, lightweight cost volume features are first generated, then efficiently converted into 3D space, and finally multi-scale feature enhancement is performed in both image and 3D space to obtain the 3D geometry features, i.e., enhanced lightweight voxel features. In addition, we introduce a novel multi-scale knowledge distillation strategy to guide multi-scale 3D geometry feature learning. Experimental results on the public benchmark show that the proposed EGFN outperforms YOLOStereo3D, an advanced fast method, by 5.16% in mAP$_{3d}$ at a cost of only 12 ms, thus achieving a better trade-off between accuracy and efficiency for stereo 3D object detection. Our code will be publicly available.
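The abstract does not spell out the "lightweight cost volume". As background, the snippet below sketches the standard correlation cost volume that stereo pipelines of this kind typically build before lifting features into 3D space; this is our illustration with assumed names, and the EGFR module's actual construction and its voxel conversion differ in detail.

```python
import torch

def build_cost_volume(left_feat, right_feat, max_disp):
    """Plain correlation cost volume over candidate disparities.

    left_feat, right_feat: (B, C, H, W) stereo feature maps.
    Returns: (B, max_disp, H, W), one correlation map per disparity.
    """
    B, C, H, W = left_feat.shape
    vol = left_feat.new_zeros(B, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            vol[:, d] = (left_feat * right_feat).mean(1)
        else:
            # Shift the right features by d pixels; columns < d stay zero.
            vol[:, d, :, d:] = (left_feat[..., d:] * right_feat[..., :-d]).mean(1)
    return vol
```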
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive a tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
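For intuition about the objective (not the Wasserstein reformulation, which is the paper's actual contribution), here is a toy numeric sketch of a mean/percentile blend over sampled returns; the weight `beta`, the risk level `alpha`, and the function name are illustrative choices of ours.

```python
import numpy as np

def return_risk_objective(returns, beta=0.5, alpha=0.1):
    """Blend the mean return with its lower alpha-percentile (a VaR-style
    risk term). The paper's exact weighting and ambiguity set are richer."""
    mean = returns.mean()
    percentile = np.quantile(returns, alpha)  # pessimistic tail of the return
    return beta * mean + (1 - beta) * percentile

rng = np.random.default_rng(0)
sampled_returns = rng.normal(loc=1.0, scale=0.5, size=10_000)
print(return_risk_objective(sampled_returns))
```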
Despite significant progress in object categorization in recent years, a number of important challenges remain; mainly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and address problems of supervised, zero-shot, generalized zero-shot, and open set recognition using a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to a 310K-class vocabulary, on the Animals with Attributes and ImageNet datasets.
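The distance constraints can be stated compactly: for a labeled sample, the distance to its correct prototype should beat the distance to every other vocabulary atom by a margin. Below is a minimal NumPy sketch of that hinge penalty; it is our simplification, and the paper's weighted maximum margin objective is richer.

```python
import numpy as np

def vocab_informed_hinge(x_emb, y, prototypes, margin=1.0):
    """Hinge penalty: sample must sit closer to its own prototype than to
    any other vocabulary atom (supervised or unsupervised), by a margin.

    x_emb: (D,) embedded sample; prototypes: (K, D); y: true class index.
    """
    d = np.linalg.norm(prototypes - x_emb, axis=1)
    others = np.delete(d, y)  # distances to every other vocabulary atom
    return np.maximum(0.0, margin + d[y] - others).sum()
```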
A noisy training set usually leads to degraded generalization and robustness of neural networks. In this paper, we propose a novel, theoretically guaranteed clean-sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method to model the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under some conditions. Under general scenarios, these conditions may no longer be satisfied, and some noisy data are falsely selected as clean. To solve this problem, we propose a data-adaptive method, Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) among the selected clean data. To improve efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.
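The core mean-shift idea behind SPR can be sketched in a few lines: regress one-hot labels on features while giving every sample its own L1-penalized shift parameter, then flag samples whose shift is driven exactly to zero as clean. The proximal-gradient solver below is our toy stand-in, not the paper's scalable solver, and it does not implement the Knockoffs-SPR FSR control.

```python
import numpy as np

def spr_select_clean(X, Y, lam=0.1, lr=0.1, steps=500):
    """Fit Y ~ X @ B + G with an L1 penalty on the per-sample shift G
    (ISTA-style updates); rows where G is zero are flagged as clean.

    X: (n, d) features; Y: (n, c) one-hot labels.
    """
    n, d = X.shape
    _, c = Y.shape
    B = np.zeros((d, c))
    G = np.zeros((n, c))
    for _ in range(steps):
        R = X @ B + G - Y                  # residual of the joint model
        B -= lr * (X.T @ R) / n            # gradient step on the weights
        G -= lr * R / n                    # gradient step on the shifts ...
        G = np.sign(G) * np.maximum(np.abs(G) - lr * lam, 0.0)  # ... then soft-threshold
    return np.linalg.norm(G, axis=1) == 0.0  # boolean clean mask
```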
Forecasts by the European Centre for Medium-Range Weather Forecasts (ECMWF; EC for short) can provide a basis for the establishment of maritime-disaster warning systems, but they contain some systematic biases. The fifth-generation EC atmospheric reanalysis (ERA5) data have high accuracy, but are delayed by about 5 days. To overcome this issue, a spatiotemporal deep-learning method can be used for nonlinear mapping between EC and ERA5 data, improving the quality of EC wind forecast data in real time. In this study, we developed the Multi-Task-Double Encoder Trajectory Gated Recurrent Unit (MT-DETrajGRU) model, which uses an improved double-encoder forecaster architecture to model the spatiotemporal sequence of the U and V components of the wind field, and we designed a multi-task learning loss function to correct wind speed and wind direction simultaneously with a single model. The study area was the western North Pacific (WNP), and real-time rolling bias corrections were made for 10-day wind-field forecasts released by the EC between December 2020 and November 2021, divided into four seasons. Compared with the original EC forecasts, after correction using the MT-DETrajGRU model the wind speed and wind direction biases in the four seasons were reduced by 8-11% and 9-14%, respectively. In addition, the proposed method models the data uniformly under different weather conditions. The correction performance under normal and typhoon conditions was comparable, indicating that the data-driven model constructed here is robust and generalizable.
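The multi-task loss idea, correcting speed and direction with one model, can be sketched directly on the (U, V) components. The code below is our guess at the general shape: the task weights, the cosine form of the direction term, and the names are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def wind_multitask_loss(pred_uv, true_uv, w_speed=1.0, w_dir=1.0):
    """One loss over both wind speed and wind direction, computed from
    predicted and true (U, V) components of shape (..., 2)."""
    speed_p = torch.linalg.norm(pred_uv, dim=-1)
    speed_t = torch.linalg.norm(true_uv, dim=-1)
    speed_loss = torch.mean((speed_p - speed_t) ** 2)
    # Direction error via the angle between (U, V) vectors; the cosine
    # form avoids the 0/360-degree wrap-around problem.
    dir_loss = torch.mean(1.0 - F.cosine_similarity(pred_uv, true_uv, dim=-1))
    return w_speed * speed_loss + w_dir * dir_loss
```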
Despite a sea of interpretability methods that can produce plausible explanations, the field has also empirically seen many failure cases of such methods. In light of these results, it remains unclear for practitioners how to use these methods and choose between them in a principled way. In this paper, we show that for even moderately rich model classes (easily satisfied by neural networks), any feature attribution method that is complete and linear--for example, Integrated Gradients and SHAP--can provably fail to improve on random guessing for inferring model behaviour. Our results apply to common end-tasks such as identifying local model behaviour, spurious feature identification, and algorithmic recourse. One takeaway from our work is the importance of concretely defining end-tasks. In particular, we show that once such an end-task is defined, a simple and direct approach of repeated model evaluations can outperform many other complex feature attribution methods.
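The "repeated model evaluations" baseline is simple enough to write out: to answer a concrete local-behaviour question, evaluate the model on the original and perturbed input and compare, rather than reading off an attribution score. The sketch below is our toy illustration of that takeaway.

```python
import numpy as np

def direct_feature_effect(model, x, i, delta=1.0):
    """Answer 'what happens to f if feature i changes by delta?' by direct
    evaluation, instead of inferring it from a feature attribution."""
    x_pert = x.copy()
    x_pert[i] += delta
    return model(x_pert) - model(x)

# Toy usage with a nonlinear model, where linear summaries can mislead.
model = lambda x: np.sin(x[0]) * x[1]
x0 = np.array([0.3, 2.0])
print(direct_feature_effect(model, x0, i=0, delta=0.5))
```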
Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples, and their reasoning capabilities are still quite limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key to this method is a modality conversion module, named DePlot, which translates the image of a plot or chart into a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over the finetuned SOTA on human-written queries from the chart QA task.
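The plug-and-play step is essentially prompt construction: DePlot's linearized table is pasted into a text prompt for an off-the-shelf LLM. The sketch below shows one way that could look; the prompt wording and the helper name are ours, not the paper's.

```python
def build_deplot_prompt(linearized_table: str, question: str) -> str:
    """Wrap a DePlot-style linearized table and a question into an LLM prompt."""
    return (
        "Read the following table, extracted from a chart, and answer the "
        "question.\n\n"
        f"Table:\n{linearized_table}\n\n"
        f"Question: {question}\nAnswer:"
    )

table = "Year | Sales\n2020 | 10\n2021 | 25\n2022 | 40"
print(build_deplot_prompt(table, "By how much did sales grow from 2020 to 2022?"))
```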
Robust Markov decision processes (RMDPs) are promising models that provide reliable policies under ambiguities in model parameters. As opposed to nominal Markov decision processes (MDPs), however, the state-of-the-art solution methods for RMDPs are limited to value-based methods, such as value iteration and policy iteration. This paper proposes Double-Loop Robust Policy Gradient (DRPG), the first generic policy gradient method for RMDPs with a global convergence guarantee in tabular problems. Unlike value-based methods, DRPG does not rely on dynamic programming techniques. In particular, the inner-loop robust policy evaluation problem is solved via projected gradient descent. Finally, our experimental results demonstrate the performance of our algorithm and verify our theoretical guarantees.
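The inner loop can be pictured as ordinary projected gradient descent over the adversarial model parameters. The sketch below is our generic illustration: a box ambiguity set stands in for whatever set the paper uses, and `grad_fn` is an assumed callable returning the gradient of the policy's value with respect to those parameters.

```python
import numpy as np

def project_box(p, lo, hi):
    """Project onto a box [lo, hi] (a stand-in for the RMDP ambiguity set)."""
    return np.clip(p, lo, hi)

def inner_robust_evaluation(grad_fn, p0, lo, hi, lr=0.1, steps=200):
    """Inner loop of a DRPG-style method: projected gradient descent over
    adversarial parameters p to approximate the worst-case policy value."""
    p = p0.copy()
    for _ in range(steps):
        p = project_box(p - lr * grad_fn(p), lo, hi)  # step toward worst case
    return p
```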